
    Scene extraction in motion pictures

    This paper addresses the challenge of bridging the semantic gap between the rich meaning users desire when they query to locate and browse media and the shallowness of the media descriptions that can be computed in today's content management systems. To facilitate high-level semantics-based content annotation and interpretation, we tackle the problem of automatically decomposing motion pictures into meaningful story units, namely scenes. Since a scene is a complicated and subjective concept, we first propose guidelines from film production to determine when a scene change occurs. We then investigate different rules and conventions followed as part of Film Grammar that guide and shape an algorithmic solution for determining a scene. Two different techniques using intershot analysis are proposed as solutions in this paper. In addition, we present different refinement mechanisms, such as film-punctuation detection founded on Film Grammar, to further improve the results. These refinement techniques demonstrate significant improvements in overall performance. Furthermore, we analyze errors in the context of film-production techniques, which offers useful insights into the limitations of our method.
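    As a rough illustration of the intershot-analysis idea (not the paper's exact algorithm), a scene boundary can be hypothesized wherever a shot's colour content stops resembling the shots immediately before it. A minimal Python sketch, where `shot_histograms`, the window size, and the threshold are illustrative assumptions:

```python
import numpy as np

def scene_boundaries(shot_histograms, window=3, threshold=0.55):
    """Flag a scene change before shot i when its colour histogram is
    dissimilar to every shot in the preceding window.
    shot_histograms: list of 1-D normalised colour histograms, one per shot.
    Returns indices of shots that start a new scene."""
    boundaries = []
    for i in range(1, len(shot_histograms)):
        prev = shot_histograms[max(0, i - window):i]
        # Histogram intersection as a simple similarity measure.
        sims = [np.minimum(shot_histograms[i], h).sum() for h in prev]
        if max(sims) < threshold:   # no recent shot looks alike
            boundaries.append(i)
    return boundaries
```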

    Automatic genre identification for content-based video categorization

    This paper presents a set of computational features, derived from our study of the editing effects, motion, and color used in videos, for the task of automatic video categorization. Besides representing human understanding of the typical attributes of different video genres, these features are inspired by the techniques and rules that many directors use to give a genre its specific characteristics and intended emotional impact on viewers. We propose new features while also employing traditionally used ones for classification. This research goes beyond existing work with a systematic analysis of the trends exhibited by each of our features in genres such as cartoons, commercials, music, news, and sports, enabling an understanding of the similarities, dissimilarities, and likely confusions between genres. Classification results from our experiments on several hours of video establish the usefulness of this feature set. We also explore the clip duration required to achieve reliable genre identification and demonstrate its impact on classification accuracy.
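    For illustration only (the specific features and classifier below are generic placeholders, not the paper's feature set), genre identification of this kind reduces to computing per-clip statistics such as editing pace, motion, and colour usage, and feeding them to a standard classifier:

```python
import numpy as np

def clip_features(shot_lengths, motion_magnitudes, frame_saturations):
    """Toy per-clip feature vector: editing rhythm, motion and colour statistics."""
    return np.array([
        np.mean(shot_lengths), np.std(shot_lengths),            # editing pace
        np.mean(motion_magnitudes),                              # camera/object motion
        np.mean(frame_saturations), np.std(frame_saturations),   # colour usage
    ])

def nearest_centroid_genre(features, centroids):
    """centroids: dict mapping genre name -> mean feature vector of training clips."""
    return min(centroids, key=lambda g: np.linalg.norm(features - centroids[g]))
```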

    Finding the optimal temporal partitioning of video sequences

    Existing techniques for shot partitioning either process each shot boundary independently or proceed sequentially. The sequential approach assumes that the last shot boundary was correctly detected and uses the shot-length distribution to adapt the threshold for detecting the next boundary. Such techniques are only locally optimal and rest on the strong assumption that the previous boundary was detected correctly. Addressing these fundamental issues, we aim in this paper to find the globally optimal shot partitioning by using Bayesian principles to model the probability that a particular partition of the video is the shot partition. A computationally efficient algorithm based on dynamic programming is then formulated. Experimental results on a large movie set show that our algorithm performs consistently better than the best adaptive-thresholding technique commonly used for this task.
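    A schematic of the dynamic-programming formulation, under assumed placeholder scoring (an exponential shot-length prior plus per-frame boundary evidence, not the paper's Bayesian model): score every candidate shot [i, j) and pick the partition that maximises the total score, which guarantees a globally rather than locally optimal partition.

```python
import math

def optimal_partition(boundary_scores, mean_len=80.0):
    """boundary_scores[j]: evidence (e.g. frame dissimilarity) that a shot ends at frame j+1.
    Returns the globally best set of cut positions under a toy score:
    exponential shot-length prior + boundary evidence."""
    n = len(boundary_scores)

    def segment_score(i, j):              # score of a shot spanning frames [i, j)
        length = j - i
        length_ll = -length / mean_len - math.log(mean_len)   # exponential prior
        return length_ll + boundary_scores[j - 1]

    best = [-math.inf] * (n + 1)
    back = [0] * (n + 1)
    best[0] = 0.0
    for j in range(1, n + 1):
        for i in range(max(0, j - 400), j):      # cap shot length for speed
            s = best[i] + segment_score(i, j)
            if s > best[j]:
                best[j], back[j] = s, i
    cuts, j = [], n
    while j > 0:                                 # backtrack the optimal partition
        cuts.append(j)
        j = back[j]
    return sorted(cuts)
```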

    Determining dramatic intensification via flashing lights in movies

    Movie directors and producers worldwide, in their quest to narrate a good story that warrants repeated audience viewing, use many cinematic elements to intensify and clarify the viewing experience. One such element that directors manipulate is lighting. In this paper we examine one aspect of lighting, namely flashing lights, and its role as an intensifier of dramatic effects in film. We present an algorithm for robust extraction of flashing lights, a simple mechanism for grouping detected flashing lights into flashing-light scenes, and an analysis of the role of these segments in story narration. In addition, we demonstrate how flashing-light detection can improve the performance of shot-based video segmentation. Experiments on a number of video sequences extracted from real movies yield good results: our technique detects 90.4% of flashing lights, and the detected flashing lights correctly eliminate 92.7% of the false cuts in these sequences. In addition, data are presented to demonstrate the association between flashing-light scenes and dramatic intensification events such as supernatural power, crisis, or excitement.
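    As a rough sketch of the underlying idea (the luminance-only cue and the thresholds are assumptions, not the paper's method), a flash shows up as a brief, large jump in average frame brightness that returns to the pre-flash level within a few frames, whereas a genuine cut does not return:

```python
import numpy as np

def detect_flashes(frame_luminance, jump=40.0, max_flash_len=4):
    """frame_luminance: 1-D array of mean frame brightness (0-255).
    Returns frame indices where a brief luminance spike starts."""
    flashes = []
    lum = np.asarray(frame_luminance, dtype=float)
    for i in range(1, len(lum) - max_flash_len):
        if lum[i] - lum[i - 1] > jump:                    # sudden brightening
            # brightness must drop back near the pre-flash level shortly after
            after = lum[i + 1:i + 1 + max_flash_len]
            if np.any(np.abs(after - lum[i - 1]) < jump / 2):
                flashes.append(i)
    return flashes
```

    Detections that coincide with candidate shot cuts can then be used to discard those cuts as false positives, and runs of nearby detections can be grouped into flashing-light scenes.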

    Identifying film takes for cinematic analysis

    In this paper, we focus on the 'reverse editing' problem in movie analysis, i.e., the extraction of film takes, the original camera shots that a film editor selects and arranges to produce a finished scene. The ability to disassemble final scenes and shots into takes is essential for nonlinear browsing, content annotation, and the extraction of higher-order cinematic constructs from film. In this work, we investigate agglomerative hierarchical clustering methods along with different similarity metrics and group distances for this task, and demonstrate our findings on 10 movies.
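    A minimal sketch of agglomerative clustering of shots into candidate takes, assuming each shot is summarised by a key-frame feature vector; the descriptor, distance metric, linkage, and cut-off below are illustrative choices, not the settings studied in the paper. It relies on SciPy's standard hierarchical-clustering routines:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def group_shots_into_takes(shot_features, cutoff=0.4):
    """shot_features: (num_shots, dim) array of key-frame descriptors.
    Returns an array assigning each shot to a take (cluster) id."""
    # Group-average linkage on cosine distance between shot descriptors.
    Z = linkage(np.asarray(shot_features), method="average", metric="cosine")
    return fcluster(Z, t=cutoff, criterion="distance")
```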

    Improved fade and dissolve detection for reliable video segmentation

    We present improved algorithms for automatic fade and dissolve detection in digital video analysis. We devise new two-step algorithms for fade and dissolve detection and introduce a method for eliminating false positives from a list of detected candidate transitions. In this detailed study of gradual shot transitions, our objective has been to accurately classify the type of transition (fade-in, fade-out, or dissolve) and to precisely locate its boundaries. This distinguishes our work from earlier work on scene change detection, which focuses on identifying the existence of a transition rather than its precise temporal extent. We evaluate our algorithms against two other commonly used methods on a comprehensive data set and demonstrate the improved performance resulting from our enhancements.
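    A simplified two-step fade-out check in this spirit (the near-monochrome test and the luminance-ramp criterion are assumptions about the general approach, not the paper's exact algorithm): first locate near-monochrome frames, then confirm that mean luminance decreases roughly monotonically into them.

```python
import numpy as np

def detect_fade_outs(frames, var_thresh=20.0, ramp_len=12):
    """frames: list of 2-D grayscale arrays.
    Step 1: find near-monochrome frames (very low pixel variance).
    Step 2: confirm a monotonic luminance ramp leading into them."""
    means = np.array([f.mean() for f in frames])
    variances = np.array([f.var() for f in frames])
    fades = []
    for i in np.where(variances < var_thresh)[0]:
        if i < ramp_len:
            continue
        ramp = means[i - ramp_len:i + 1]
        if np.all(np.diff(ramp) <= 0.5):   # brightness (almost) never increases
            fades.append(i)
    return fades
```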

    Neighborhood coherence and edge based approaches to film scene extraction

    In order to enable high-level semantics-based video annotation and interpretation, we tackle the problem of automatically decomposing motion pictures into meaningful story units, namely scenes. Since a scene is a complicated and subjective concept, we first propose guidelines from film production to determine when a scene change occurs in a film. We examine different rules and conventions followed as part of Film Grammar to guide and shape our algorithmic solution for determining a scene boundary. Two different techniques are proposed as new solutions in this paper. Our experimental results on 10 full-length movies show that our technique based on shot-sequence coherence performs well and is reasonably better than the color-edge-based approach.
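    One way to picture a shot-sequence coherence measure (an illustrative formulation, not the paper's exact measure): score every gap between shots by the best colour-histogram match between the shots on either side of it, and treat local minima of that coherence as candidate scene boundaries.

```python
import numpy as np

def coherence_minima(shot_histograms, window=4):
    """shot_histograms: list of normalised colour histograms, one per shot.
    Returns indices of shots that begin a new scene (local coherence minima)."""
    hists = [np.asarray(h, dtype=float) for h in shot_histograms]
    n = len(hists)
    coherence = np.zeros(n)
    for g in range(1, n):                       # gap immediately before shot g
        left = hists[max(0, g - window):g]
        right = hists[g:min(n, g + window)]
        # Best histogram-intersection match across the gap.
        coherence[g] = max(np.minimum(a, b).sum() for a in left for b in right)
    return [g for g in range(2, n - 1)
            if coherence[g] < coherence[g - 1] and coherence[g] <= coherence[g + 1]]
```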

    Film grammar based refinements to extracting scenes in motion pictures

    To enable high-level semantic indexing of video, we tackle the problem of automatically structuring motion pictures into meaningful story units, namely scenes. In our recent work, drawing guidance from film grammar, we proposed an algorithmic solution for extracting scenes in motion pictures based on a shot-neighborhood color-coherence measure. In this paper, we extend that work by presenting various refinement mechanisms, inspired by knowledge of the film devices brought to bear while crafting scenes, to further improve the results of the scene-detection algorithm. We apply the enhanced algorithm to ten motion pictures and demonstrate the resulting improvements in performance.

    Joint Attention for Automated Video Editing

    Joint attention refers to the shared focal points of attention of the occupants of a space. In this work, we introduce a computational definition of joint attention for the automated editing of meetings recorded in the multi-camera environments of the AMI corpus. Using extracted head pose and individual headset amplitude as features, we developed three editing methods: (1) a naive audio-based method that selects the camera using only the headset input, (2) a rule-based edit that selects cameras at a fixed pacing using pose data, and (3) an editing algorithm using an LSTM (Long Short-Term Memory) network that learns joint attention from both pose and audio data, trained on expert edits. The methods are evaluated qualitatively against the human edit, and quantitatively in a user study with 22 participants. Results indicate that LSTM-trained joint attention produces edits comparable to the expert edit, offering a wider range of camera views than the audio-based method while being more generalizable than the rule-based methods.
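    A minimal PyTorch sketch of the learned variant: an LSTM consumes per-frame head-pose and headset-amplitude features and predicts which camera to cut to. The feature dimension, single-layer architecture, and the name `CameraSelector` are illustrative assumptions, not the authors' model.

```python
import torch
import torch.nn as nn

class CameraSelector(nn.Module):
    """Maps a sequence of per-frame features (head pose + headset amplitude
    for each participant) to a camera choice per frame."""
    def __init__(self, feature_dim, num_cameras, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(feature_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_cameras)

    def forward(self, x):            # x: (batch, time, feature_dim)
        out, _ = self.lstm(x)
        return self.head(out)        # (batch, time, num_cameras) camera logits

# Training against expert edits could use a standard cross-entropy loss, e.g.:
# loss = nn.CrossEntropyLoss()(logits.reshape(-1, num_cameras), labels.reshape(-1))
```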

    Search for new phenomena in final states with an energetic jet and large missing transverse momentum in pp collisions at √s = 8 TeV with the ATLAS detector

    Results of a search for new phenomena in final states with an energetic jet and large missing transverse momentum are reported. The search uses 20.3 fb⁻¹ of √s = 8 TeV data collected in 2012 with the ATLAS detector at the LHC. Events are required to have at least one jet with p_T > 120 GeV and no leptons. Nine signal regions are considered, with increasing missing transverse momentum requirements between E_T^miss > 150 GeV and E_T^miss > 700 GeV. Good agreement is observed between the number of events in data and the Standard Model expectations. The results are translated into exclusion limits on models with either large extra spatial dimensions, pair production of weakly interacting dark matter candidates, or production of very light gravitinos in a gauge-mediated supersymmetric model. In addition, limits on the production of an invisibly decaying Higgs-like boson leading to similar topologies in the final state are presented.